Vision Transformers (ViT) are becoming increasingly popular in image processing. Specifically, we study the effectiveness of test-time adaptation (TTA) on ViT, a technique that has emerged to correct a model's own predictions at test time. We first benchmark various test-time adaptation approaches on ViT-B16 and ViT-L16. The results show that TTA is effective on ViT when a proper loss function is used, and that prior conventions (sensibly selecting the modulation parameters) are unnecessary. Based on this observation, we propose a new test-time adaptation method called class-conditional feature alignment (CFA), which minimizes both the class-conditional distribution differences and the whole-distribution differences of the hidden representations between the source and target in an online manner. Experiments on image-corruption benchmarks (CIFAR-10-C, CIFAR-100-C, and ImageNet-C) and domain adaptation (digits datasets and ImageNet-Sketch) show that CFA stably outperforms the existing baselines across the various datasets. We also verify that CFA is model-agnostic by experimenting on ResNet, MLP-Mixer, and several ViT variants (ViT-AugReg, DeiT, and BeiT). Using a BeiT backbone, CFA achieves a 19.8% top-1 error rate on ImageNet-C, outperforming the existing test-time adaptation baseline of 44.0%. This is a state-of-the-art result among TTA methods that do not require altering the training phase.
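The alignment idea behind CFA can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the paper's implementation: only first moments are matched here, the source statistics and pseudo-labels are assumed to be given, and the function name is invented for illustration.

```python
import numpy as np

def cfa_style_loss(feats, pseudo_labels, src_class_means, src_mean):
    """Illustrative CFA-style alignment loss (hypothetical simplification):
    squared distance between target and source feature means, both overall
    (whole-distribution term) and per pseudo-class (class-conditional term)."""
    # Whole-distribution alignment.
    loss = np.sum((feats.mean(axis=0) - src_mean) ** 2)
    # Class-conditional alignment, using pseudo-labels on the target batch.
    for c, mu_c in src_class_means.items():
        mask = pseudo_labels == c
        if mask.any():
            loss += np.sum((feats[mask].mean(axis=0) - mu_c) ** 2)
    return loss
```

A target batch whose per-class and overall statistics match the source yields zero loss; shifted features yield a positive loss that online adaptation would minimize.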
Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and are generally known as excellent few-shot learners with task-specific exemplars. Notably, chain-of-thought (CoT) prompting, a technique for eliciting complex multi-step reasoning through step-by-step answer examples, achieved state-of-the-art performance in arithmetic and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' ability for few-shot learning, we show that LLMs are decent zero-shot reasoners by simply adding "Let's think step by step" before each answer. Experimental results demonstrate that, using the same single prompt template, our zero-shot CoT significantly outperforms zero-shot LLM performance on diverse benchmark reasoning tasks including arithmetic (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g., increasing the accuracy on MultiArith from 17.7% to 78.7% and on GSM8K from 10.4% to 40.7% with the 175B-parameter InstructGPT model, with similar magnitudes of improvement for another off-the-shelf large model, the 540B-parameter PaLM. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting that high-level, multi-task broad cognitive capabilities may be extracted by simple prompting. We hope our work not only serves as the minimal strongest zero-shot baseline for the challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or few-shot exemplars.
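Zero-shot CoT is a two-stage prompting pipeline: first elicit reasoning with the trigger phrase, then extract the final answer from that reasoning. A minimal sketch of the prompt construction (the model call itself is omitted; the answer-cue wording varies per task in the paper):

```python
def zero_shot_cot_prompts(question: str, reasoning: str = "") -> tuple[str, str]:
    """Build the two prompts of zero-shot chain-of-thought:
    1) reasoning extraction: append the trigger phrase after the question;
    2) answer extraction: append the model's generated reasoning plus an
       answer cue, then query the model again for the final answer."""
    trigger = "Let's think step by step."
    reasoning_prompt = f"Q: {question}\nA: {trigger}"
    answer_prompt = f"{reasoning_prompt} {reasoning}\nTherefore, the answer is"
    return reasoning_prompt, answer_prompt
```

The first prompt is sent to the LLM, its completion is passed back as `reasoning`, and the second prompt yields the answer, all without a single hand-crafted exemplar.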
Drug repositioning holds great promise because it can reduce the time and cost of new drug development. While drug repositioning can omit various R&D processes, confirming pharmacological effects on biomolecules is essential for application to new diseases. Biomedical explainability in a drug repositioning model can support appropriate insights in subsequent in-depth studies. However, the validity of the XAI methodology is still under debate, and the effectiveness of XAI in drug repositioning prediction applications remains unclear. In this study, we propose GraphIX, an explainable drug repositioning framework using biological networks, and quantitatively evaluate its explainability. GraphIX first learns the network weights and node features using a graph neural network from known drug indications and a knowledge graph consisting of three types of nodes (whose type information is not given to the model): disease, drug, and protein. Analysis of the post-learning features showed that the node types, which were not known to the model beforehand, are distinguished through the learning process based on the graph structure. From the learned weights and features, GraphIX then predicts disease-drug associations and calculates the contribution values of the nodes located in the neighborhood of each predicted disease and drug. We hypothesized that a neighboring protein node to which the model assigns a high contribution is important for understanding the actual pharmacological effects. Quantitative evaluation of the validity of the protein nodes' contributions using a real-world database showed that the high-contribution proteins identified by GraphIX are reasonable as mechanisms of drug action. GraphIX is a framework for evidence-based drug discovery that can present new disease-drug associations to users and identify, from a large and complex knowledge base, the proteins important for understanding their pharmacological effects.
We introduce KiloGram, a resource for studying abstract visual reasoning in humans and machines. Drawing on the history of tangram puzzles as stimuli in cognitive science, we build a richly annotated dataset that, with >1k distinct stimuli, is orders of magnitude larger and more diverse than prior resources. It is both visually and linguistically richer, moving beyond whole shape descriptions to include segmentation maps and part labels. We use this resource to evaluate the abstract visual reasoning capacities of recent multi-modal models. We observe that pre-trained weights demonstrate limited abstract reasoning, which dramatically improves with fine-tuning. We also observe that explicitly describing parts aids abstract reasoning for both humans and models, especially when jointly encoding the linguistic and visual inputs. KiloGram is available at https://lil.nlp.cornell.edu/kilogram .
We propose a light-weight and highly efficient Joint Detection and Tracking pipeline for the task of Multi-Object Tracking using a fully-transformer architecture. It is a modified version of TransTrack, which overcomes the computational bottleneck associated with its design, and at the same time, achieves state-of-the-art MOTA score of 73.20%. The model design is driven by a transformer based backbone instead of CNN, which is highly scalable with the input resolution. We also propose a drop-in replacement for Feed Forward Network of transformer encoder layer, by using Butterfly Transform Operation to perform channel fusion and depth-wise convolution to learn spatial context within the feature maps, otherwise missing within the attention maps of the transformer. As a result of our modifications, we reduce the overall model size of TransTrack by 58.73% and the complexity by 78.72%. Therefore, we expect our design to provide novel perspectives for architecture optimization in future research related to multi-object tracking.
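The motivation for replacing the FFN can be seen from a back-of-the-envelope parameter count. The accounting below is a rough, hypothetical sketch (the function names are invented, and the real TransTrack modification has additional details): a butterfly transform over d channels needs O(d log2 d) weights versus the d*hidden dense matrices of a standard FFN, and a depth-wise convolution adds only d*k*k weights.

```python
import math

def ffn_params(d: int, hidden: int) -> int:
    """Parameters of a standard transformer FFN: two dense layers with biases."""
    return d * hidden + hidden + hidden * d + d

def butterfly_dw_params(d: int, k: int = 3) -> int:
    """Rough count for the sketched replacement (hypothetical accounting):
    a butterfly transform over d channels costs about 2 * d * log2(d)
    weights; a k x k depth-wise convolution adds d * k * k weights
    plus d biases."""
    return 2 * d * int(math.log2(d)) + d * k * k + d

# Compare at a typical embedding width with a 4x FFN expansion.
print(ffn_params(256, 1024), butterfly_dw_params(256))
```

Even at this toy scale the replacement is roughly two orders of magnitude smaller, consistent in spirit with the reported 58.73% model-size and 78.72% complexity reductions.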
We present lilGym, a new benchmark for language-conditioned reinforcement learning in visual environments. lilGym is based on 2,661 highly-compositional human-written natural language statements grounded in an interactive visual environment. We annotate all statements with executable Python programs representing their meaning to enable exact reward computation in every possible world state. Each statement is paired with multiple start states and reward functions to form thousands of distinct Markov Decision Processes of varying difficulty. We experiment with lilGym with different models and learning regimes. Our results and analysis show that while existing methods are able to achieve non-trivial performance, lilGym forms a challenging open problem. lilGym is available at https://lil.nlp.cornell.edu/lilgym/.
Glass transitions are widely observed in a range of soft-matter systems. However, the physical mechanism behind these transitions remains unknown despite years of research. In particular, an important unresolved question is whether the glass transition is accompanied by a divergence of the correlation lengths of characteristic static structures. Recently, a method that can predict long-time dynamics from purely static information with high accuracy was proposed; however, even this method is not universal and is effective only for the Kob-Andersen system, a typical model of glass-forming liquids. In this study, we develop a method to extract the characteristic structures of glasses using machine learning, specifically convolutional neural networks. In particular, we extract the characteristic structures by quantifying the grounds for the decisions made by the network. We consider two qualitatively different glass-forming binary systems and, through comparison with several established structural indicators, demonstrate that our method can identify characteristic structures that depend on the details of the systems. Surprisingly, the extracted structures are strongly correlated with the nonequilibrium aging dynamics under thermal fluctuations.
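"Quantifying the grounds for the network's decisions" generally means attributing the classifier's output back to the input structure. A minimal, generic sketch using finite-difference saliency (the actual network and attribution method in the study may differ; `score_fn` is a placeholder for a trained classifier's score):

```python
import numpy as np

def saliency_map(score_fn, x, eps=1e-4):
    """Finite-difference saliency: how much the classifier's score changes
    when each entry of the input field is perturbed. Gradient-style
    attribution like this highlights which local structures drive the
    network's glass/liquid decision."""
    base = score_fn(x)
    sal = np.zeros_like(x)
    for idx in np.ndindex(x.shape):
        xp = x.copy()
        xp[idx] += eps
        sal[idx] = (score_fn(xp) - base) / eps
    return sal
```

High-saliency regions of a configuration would then be the candidate "characteristic structures" to compare against established structural indicators.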
Gait plan is a procedure typically applied to ground robots, e.g., quadruped robots; the tilt-rotor, a novel type of quadrotor, is not one of them. In controlling the tilt-rotor by feedback linearization, the tilting angles (inputs) are expected to change over-intensively, which may not be desired in applications. To help suppress the intensive change in the tilting angles, a gait plan procedure is introduced to the tilt-rotor before feedback linearization: the tilting angles are specified by the user in advance rather than given by the control rule. Under this scenario, however, the decoupling matrix in feedback linearization can be singular for some attitudes, i.e., combinations of roll and pitch angles, which hinders the further application of feedback linearization. With this concern, the Two Color Map Theorem was previously established to maximize the acceptable attitude region, within which the combinations of roll and pitch yield an invertible decoupling matrix. That theorem, however, over-restricts the choice of tilting angles, which can rule out some feasible robust gaits. This paper gives a generalized Two Color Map Theorem; all robust gaits can be found based on this generalized theorem. The robustness of three gaits that satisfy the generalized Two Color Map Theorem while violating the original Two Color Map Theorem is analyzed. The results show that the generalized Two Color Map Theorem completes the search for robust gaits for the tilt-rotor.
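The acceptable attitude region can be pictured as the set of roll-pitch combinations where the decoupling matrix has nonzero determinant. A minimal sketch of such a scan, with a toy determinant standing in for the tilt-rotor's actual gait-dependent decoupling matrix (which this sketch does not reproduce):

```python
import numpy as np

def acceptable_region(det_fn, n=181, tol=1e-6):
    """Scan roll-pitch combinations and mark those whose decoupling-matrix
    determinant is numerically nonzero, i.e., the matrix is invertible.
    `det_fn` is a placeholder for the determinant induced by a chosen gait."""
    rolls = np.linspace(-np.pi / 2, np.pi / 2, n)
    pitches = np.linspace(-np.pi / 2, np.pi / 2, n)
    ok = np.zeros((n, n), dtype=bool)
    for i, r in enumerate(rolls):
        for j, p in enumerate(pitches):
            ok[i, j] = abs(det_fn(r, p)) > tol
    return rolls, pitches, ok

# Toy determinant that vanishes as either angle approaches +/- pi/2:
toy_det = lambda roll, pitch: np.cos(roll) * np.cos(pitch)
```

Plotting `ok` over the grid gives the two-color map the theorem refers to: one color for attitudes with an invertible decoupling matrix, one for singular attitudes.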
Learning stable dynamics from observed time-series data is an important problem in robotics, physical modeling, and systems biology. Many of these dynamics are represented as input-output systems communicating with an external environment. In this study, we focus on input-output stable systems, which exhibit robustness against unexpected stimuli and noise. We propose a method to learn nonlinear systems with guaranteed input-output stability. Our proposed method utilizes a differentiable projection onto the space satisfying the Hamilton-Jacobi inequality to realize input-output stability. The problem of finding this projection can be formulated as a quadratically constrained quadratic programming problem, and we derive its specific solution analytically. We also apply our method to a toy bistable model and to the task of training a benchmark generated from a glucose-insulin simulator. The results show that, with our method, nonlinear systems with neural networks achieve input-output stability, unlike naive neural networks. Our code is available at https://github.com/clinfo/DeepIOStability.
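The key ingredient is that projecting onto a quadratically constrained set can admit a closed-form solution. A toy illustration with a norm-ball constraint (the paper's projection targets the Hamilton-Jacobi inequality, not a norm ball, but it is solved in the same closed-form QCQP spirit):

```python
import numpy as np

def project_to_ball(y, c):
    """Analytic solution of the QCQP
        min_x ||x - y||^2   s.t.  ||x||^2 <= c:
    if y already satisfies the constraint, keep it; otherwise scale y
    back onto the boundary of the ball. Because the map is differentiable
    almost everywhere, it can sit inside a network trained end-to-end."""
    norm = np.linalg.norm(y)
    if norm ** 2 <= c:
        return y
    return y * (np.sqrt(c) / norm)
```

In the method, an analogous analytic projection is applied to the learned dynamics so that the Hamilton-Jacobi inequality, and hence input-output stability, holds by construction.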
This work is concerned with discovering the partial differential equations (PDEs) of physical systems. Existing methods have demonstrated PDE identification from finite observations but fail to maintain satisfactory performance under noise, partly due to suboptimally estimated derivatives and discovered PDE coefficients. We address these issues by introducing a noise-aware physics-informed machine learning (nPIML) framework to discover governing PDEs from data following arbitrary noise distributions. Our proposal is twofold. First, we propose a pair of neural networks, namely a solver and a preselector, which yield an interpretable neural representation of the hidden physical constraints. After being jointly trained, the solver network approximates potential candidate terms, e.g., partial derivatives, which are then fed into a sparse regression algorithm that initially unveils the most likely parsimonious PDE, decided according to an information criterion. Second, we propose denoising physics-informed neural networks (dPINNs), based on the discrete Fourier transform (DFT), to deliver a set of optimally fine-tuned PDE coefficients respecting the noise-reduced variables. The structure of the denoising PINNs is partitioned into forefront projection networks and a PINN, initialized by the previously learned solver. Our extensive experiments on five canonical PDEs confirm that the proposed framework presents a robust and interpretable approach to PDE discovery, applicable to a wide range of systems, possibly complicated by noise.
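The sparse-regression step can be sketched concisely: regress the time derivative on a library of candidate terms and repeatedly zero out small coefficients. This is a generic STRidge-style stand-in for the framework's solver/preselector and information-criterion machinery, shown on synthetic data where the true law is u_t = 0.5 u_xx:

```python
import numpy as np

def discover_pde(u_t, library, names, threshold=0.05, iters=10):
    """STRidge-style sparse regression: least squares of u_t on a library
    of candidate terms, iteratively zeroing coefficients below `threshold`
    and refitting on the surviving terms."""
    theta = np.column_stack(library)
    coef, *_ = np.linalg.lstsq(theta, u_t, rcond=None)
    for _ in range(iters):
        small = np.abs(coef) < threshold
        coef[small] = 0.0
        big = ~small
        if big.any():
            coef[big], *_ = np.linalg.lstsq(theta[:, big], u_t, rcond=None)
    return {n: c for n, c in zip(names, coef) if c != 0.0}

# Synthetic candidate terms; in nPIML these come from the solver network.
rng = np.random.default_rng(0)
u, u_x, u_xx = rng.normal(size=(3, 200))
result = discover_pde(0.5 * u_xx, [u, u_x, u_xx], ["u", "u_x", "u_xx"])
```

On clean data the regression recovers the single active term and its coefficient; the point of the full framework is to keep this step reliable when the derivative estimates are contaminated by noise.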